

Search for: All records

Creators/Authors contains: "Davis, Derek"


  1. We present a new method which accounts for changes in the properties of gravitational-wave detector noise over time in the PyCBC search for gravitational waves from compact binary coalescences. We use information from LIGO data quality streams that monitor the status of each detector and its environment to model changes in the rate of noise in each detector. These data quality streams allow candidates identified in the data during periods of detector malfunctions to be more efficiently rejected as noise. This method allows data from machine learning predictions of the detector state to be included as part of the PyCBC search, increasing the total number of detectable gravitational-wave signals by up to 5%. When both machine learning classifications and manually generated flags are used to search data from LIGO-Virgo’s third observing run, the total number of detectable gravitational-wave signals is increased by up to 20% compared to not using any data quality streams. We also show how this method is flexible enough to include information from large numbers of additional arbitrary data streams that may be able to further increase the sensitivity of the search. 
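The noise-rate reweighting idea described in the abstract above can be illustrated with a toy sketch. This is not the PyCBC implementation; the ranking statistic values, the flagged interval, and the 8x rate multiplier are all invented for illustration. The core idea is that a candidate's ranking statistic is reduced by the log of the modeled noise-rate increase during times a data-quality stream indicates elevated noise, so candidates from detector malfunctions are more easily rejected.

```python
import math

# Toy illustration (not the PyCBC implementation): candidates found while
# a data-quality stream indicates elevated noise are down-ranked by the
# log of the modeled increase in noise rate. All numbers are invented.
candidates = [
    {"time": 10.0, "stat": 9.0},  # quiet data
    {"time": 42.0, "stat": 9.0},  # during a hypothetical noisy period
]

def noise_rate_multiplier(t):
    """Hypothetical data-quality stream: modeled noise rate relative to
    nominal, 8x during the flagged interval [40, 50)."""
    return 8.0 if 40.0 <= t < 50.0 else 1.0

for c in candidates:
    c["reranked"] = c["stat"] - math.log(noise_rate_multiplier(c["time"]))

# The candidate in quiet data keeps its rank; the one during the flagged
# interval is penalized relative to it.
```

Because the penalty depends only on the modeled rate, adding a new data-quality stream (e.g. a machine-learning state prediction) amounts to supplying another rate-multiplier function, which is what makes this kind of scheme extensible.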
  2. Abstract The Gravity Spy project aims to uncover the origins of glitches, transient bursts of noise that hamper analysis of gravitational-wave data. By using both the work of citizen-science volunteers and machine learning algorithms, the Gravity Spy project enables reliable classification of glitches. Citizen science and machine learning are intrinsically coupled within the Gravity Spy framework, with machine learning classifications providing a rapid first-pass classification of the dataset and enabling tiered volunteer training, and volunteer-based classifications verifying the machine classifications, bolstering the machine learning training set and identifying new morphological classes of glitches. These classifications are now routinely used in studies characterizing the performance of the LIGO gravitational-wave detectors. Providing the volunteers with a training framework that teaches them to classify a wide range of glitches, as well as additional tools to aid their investigations of interesting glitches, empowers them to make discoveries of new classes of glitches. This demonstrates that, when given suitable support, volunteers can go beyond simple classification tasks to identify new features in data at a level comparable to domain experts. The Gravity Spy project is now providing volunteers with more complicated data that includes auxiliary monitors of the detector to identify the root cause of glitches. 
  3. This data set contains the individual classifications that the Gravity Spy citizen science volunteers made for glitches through 20 July 2024. Classifications made by science team members or in testing workflows have been removed, as have classifications of glitches lacking a Gravity Spy identifier. See Zevin et al. (2017) for an explanation of the citizen science task and classification interface. Data about glitches with machine-learning labels are provided in an earlier data release (Glanzer et al., 2021). Final classifications combining machine learning and volunteer classifications are provided in Zevin et al. (2022).

22 of the classification labels match those used in the earlier data release, namely 1080Lines, 1400Ripples, Air_Compressor, Blip, Chirp, Extremely_Loud, Helix, Koi_Fish, Light_Modulation, Low_Frequency_Burst, Low_Frequency_Lines, No_Glitch, None_of_the_Above, Paired_Doves, Power_Line, Repeating_Blips, Scattered_Light, Scratchy, Tomte, Violin_Mode, Wandering_Line, and Whistle. One glitch class added to the machine-learning classification, Blip_Low_Frequency, has not been added to the Zooniverse project and so does not appear in this file. Four classes were added to the citizen science platform but not to the machine learning model and so have only volunteer labels, namely 70HZLINE, HIGHFREQUENCYBURST, LOWFREQUENCYBLIP, and PIZZICATO. The glitch class Fast_Scattering, added to the machine-learning classification, has an equivalent volunteer label, CROWN, which is used here (Soni et al. 2021).

Glitches are presented to volunteers in a succession of workflows. Each workflow includes glitches classified by a machine learning classifier as likely to belong to a subset of classes, and offers the option to classify only those classes plus None_of_the_Above. Each level includes the classes available in lower levels. The top level does not add new classification options but includes all glitches, including those for which the machine learning model is uncertain of the class. Because the classes available to the volunteers change depending on the workflow, a glitch might be classified as None_of_the_Above in a lower workflow and subsequently as a different class in a higher workflow. Workflows and available classes are shown in the table below.

Workflow ID | Name             | Number of glitch classes | Glitches added
1610        | Level 1          | 3                        | Blip, Whistle, None_of_the_Above
1934        | Level 2          | 6                        | Koi_Fish, Power_Line, Violin_Mode
1935        | Level 3          | 10                       | Chirp, Low_Frequency_Burst, No_Glitch, Scattered_Light
2360        | Original level 4 | 22                       | 1080Lines, 1400Ripples, Air_Compressor, Extremely_Loud, Helix, Light_Modulation, Low_Frequency_Lines, Paired_Doves, Repeating_Blips, Scratchy, Tomte, Wandering_Line
7765        | New level 4      | 15                       | 1080Lines, Extremely_Loud, Low_Frequency_Lines, Repeating_Blips, Scratchy
2117        | Original level 5 | 22                       | No new glitch classes
7766        | New level 5      | 27                       | 1400Ripples, Air_Compressor, Paired_Doves, Tomte, Wandering_Line, 70HZLINE, CROWN, HIGHFREQUENCYBURST, LOWFREQUENCYBLIP, PIZZICATO
7767        | Level 6          | 27                       | No new glitch classes

Description of data fields:
- Classification_id: a unique identifier for the classification. A volunteer may choose multiple classes for a glitch when classifying, in which case there will be multiple rows with the same classification_id.
- Subject_id: a unique identifier for the glitch being classified. This field can be used to join the classification to data about the glitch from the prior data release.
- User_hash: an anonymized identifier for the user making the classification, or, for anonymous users, an identifier that can be used to track the user within a session but may not persist across sessions.
- Anonymous_user: True if the classification was made by a non-logged-in user.
- Workflow: the Gravity Spy workflow in which the classification was made.
- Workflow_version: the version of the workflow.
- Timestamp: timestamp for the classification.
- Classification: glitch class selected by the volunteer.

Related datasets:
- For machine learning classifications of all glitches in O1, O2, O3a, and O3b, see Gravity Spy Machine Learning Classifications on Zenodo.
- For classifications of glitches combining machine learning and volunteer classifications, see Gravity Spy Volunteer Classifications of LIGO Glitches from Observing Runs O1, O2, O3a, and O3b.
- For the training set used in the Gravity Spy machine learning algorithms, see Gravity Spy Training Set on Zenodo.
- For detailed information on the training set used for the original Gravity Spy machine learning paper, see Machine learning for Gravity Spy: Glitch classification and dataset on Zenodo.
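The schema described above has two structural features worth noting when working with the file: rows sharing a classification_id represent one volunteer classification in which several classes were selected, and Subject_id is the join key to the glitch metadata in the earlier data release. A minimal sketch with invented example rows (the real release is a full table of these columns):

```python
from collections import defaultdict

# Invented example rows mirroring the described fields. A volunteer may
# select several classes in one classification, so rows can share a
# classification_id.
rows = [
    {"classification_id": 1, "subject_id": 101, "workflow": 1610, "classification": "Blip"},
    {"classification_id": 2, "subject_id": 102, "workflow": 7766, "classification": "CROWN"},
    {"classification_id": 2, "subject_id": 102, "workflow": 7766, "classification": "None_of_the_Above"},
]

# Group rows back into single classification events by classification_id.
classes_by_event = defaultdict(list)
for row in rows:
    classes_by_event[row["classification_id"]].append(row["classification"])

# subject_id joins each row to glitch metadata from the earlier data
# release (hypothetical metadata shown here).
glitch_metadata = {101: {"ml_label": "Blip"}, 102: {"ml_label": "Fast_Scattering"}}
joined = [{**row, **glitch_metadata[row["subject_id"]]} for row in rows]
```

Counting rows rather than distinct classification_id values would overcount classifications, which is why the grouping step matters for any per-volunteer or per-glitch statistics.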
  4. Abstract Ground-based gravitational-wave detectors like Cosmic Explorer (CE) can be tuned to improve their sensitivity at high or low frequencies by tuning the response of the signal extraction cavity. Enhanced sensitivity above 2 kHz enables measurements of the post-merger gravitational-wave spectrum from binary neutron star mergers, which depends critically on the unknown equation of state of hot, ultra-dense matter. Improved sensitivity below 500 Hz favors precision tests of extreme gravity with black hole ringdown signals and improves the detection prospects while facilitating an improved measurement of source properties for compact binary inspirals at cosmological distances. At intermediate frequencies, a more sensitive detector can better measure the tidal properties of neutron stars. We present and characterize the performance of tuned CE configurations that are designed to optimize detections across different astrophysical source populations. These tuning options give CE the flexibility to target a diverse set of science goals with the same detector infrastructure. We find that a 40 km CE detector outperforms a 20 km detector in all key science goals other than access to post-merger physics. This suggests that CE should include at least one 40 km facility. 
  5. Abstract The collection of gravitational waves (GWs) that are either too weak or too numerous to be individually resolved is commonly referred to as the gravitational-wave background (GWB). A confident detection and model-driven characterization of such a signal will provide invaluable information about the evolution of the universe and the population of GW sources within it. We present a new, user-friendly, Python-based package for GW data analysis to search for an isotropic GWB in ground-based interferometer data. We employ cross-correlation spectra of GW detector pairs to construct an optimal estimator of the Gaussian and isotropic GWB, and Bayesian parameter estimation to constrain GWB models. The modularity and clarity of the code allow for both a shallow learning curve and flexibility in adjusting the analysis to one's own needs. We describe the individual modules that make up pygwb, following the traditional steps of stochastic analyses carried out within the LIGO, Virgo, and KAGRA Collaboration. We then describe the built-in pipeline that combines the different modules and validate it with both mock data and real GW data from the third Advanced LIGO and Virgo observing run (O3). We successfully recover all mock data injections and reproduce published results. 
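The cross-correlation logic that the abstract above describes can be shown in a stripped-down form. This toy is not the pygwb estimator (which works with cross-correlation spectra of detector pairs, not raw time-domain products, and adds Bayesian parameter estimation): the point is only that when a weak common stochastic signal h is present in two data streams whose noises are independent, the average of the product of the streams converges to the signal power while the uncorrelated noise averages away.

```python
import random

# Toy illustration of the cross-correlation idea behind a GWB search:
# a weak common signal h buried in two detectors' independent noises
# n1, n2. The product d1*d2 averages to the signal variance because the
# noise terms are uncorrelated. All numbers are illustrative.
random.seed(0)
N = 200_000
sigma_h, sigma_n = 0.1, 1.0  # weak common signal, loud detector noise

cross_sum = 0.0
for _ in range(N):
    h = random.gauss(0.0, sigma_h)
    d1 = h + random.gauss(0.0, sigma_n)  # detector 1 data
    d2 = h + random.gauss(0.0, sigma_n)  # detector 2 data
    cross_sum += d1 * d2

estimate = cross_sum / N  # converges to sigma_h**2 = 0.01 as N grows
```

Note that neither detector alone could see this signal: the single-detector variance is dominated by sigma_n**2 = 1, a hundred times the signal power, which is why GWB searches correlate pairs of detectors.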